Minimax rate of convergence and the performance of empirical risk minimization in phase retrieval

Authors

  • Guillaume Lecué
  • Shahar Mendelson
Abstract

We study the performance of Empirical Risk Minimization in both noisy and noiseless phase retrieval problems, indexed by subsets of R^n and relative to subgaussian sampling; that is, when the given data is y_i = ⟨a_i, x0⟩^2 + w_i for a subgaussian random vector a, independent subgaussian noise w and a fixed but unknown x0 that belongs to a given T ⊂ R^n. We show that ERM performed in T produces x̂ whose Euclidean distance to either x0 or −x0 depends on the gaussian mean-width of T and on the signal-to-noise ratio of the problem. The bound coincides with the one for linear regression when ‖x0‖_2 is of the order of a constant. In addition, we obtain a sharp lower bound for the phase retrieval problem. As examples, we study the class of d-sparse vectors in R^n and the unit ball in ℓ_1^n.
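The measurement model and the sign ambiguity between x0 and −x0 can be illustrated with a minimal sketch: ERM here amounts to minimizing the empirical squared loss L(x) = (1/m) Σ_i (⟨a_i, x⟩^2 − y_i)^2 over the set T. The function name, step size, iteration count, and initialization below are illustrative assumptions (plain gradient descent over T = R^n in the noiseless case), not the estimator analyzed in the paper.

```python
import numpy as np

def erm_phase_retrieval(A, y, x_init, step=1e-3, iters=3000):
    """Gradient descent on the empirical risk
    L(x) = (1/m) * sum_i ((a_i . x)^2 - y_i)^2  (illustrative sketch)."""
    x = x_init.copy()
    m = len(y)
    for _ in range(iters):
        Ax = A @ x
        r = Ax ** 2 - y                      # residuals <a_i, x>^2 - y_i
        x -= step * (4.0 / m) * (A.T @ (r * Ax))   # gradient of L at x
    return x

rng = np.random.default_rng(0)
n, m = 20, 200
x0 = rng.standard_normal(n)                  # fixed but unknown signal
A = rng.standard_normal((m, n))              # subgaussian (here Gaussian) sampling
y = (A @ x0) ** 2                            # noiseless measurements y_i = <a_i, x0>^2

# Start near x0; the loss is nonconvex, so a good initialization matters.
x_hat = erm_phase_retrieval(A, y, x_init=x0 + 0.1 * rng.standard_normal(n))

# The sign of x0 is not identifiable from y, so measure the distance
# to both x0 and -x0, as in the paper's error bound.
dist = min(np.linalg.norm(x_hat - x0), np.linalg.norm(x_hat + x0))
```

Because y_i depends on x0 only through ⟨a_i, x0⟩^2, the empirical risk is identical at x0 and −x0, which is why any error guarantee must be stated up to sign.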


Similar articles

Fast Learning Rates for Plug-in Classifiers

It has been recently shown that, under the margin (or low-noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, i.e., rates faster than n^{-1/2}. The works on this subject suggested the following two conjectures: (i) the best achievable fast rate is of the order n^{-1}, and (ii) the plug-in classifiers generally converge more slowly than the classifie...


Fast Learning Rates for Plug-in Classifiers by Jean-Yves Audibert

It has been recently shown that, under the margin (or low-noise) assumption, there exist classifiers attaining fast rates of convergence of the excess Bayes risk, that is, rates faster than n^{-1/2}. The work on this subject has suggested the following two conjectures: (i) the best achievable fast rate is of the order n^{-1}, and (ii) the plug-in classifiers generally converge more slowly than the cl...


Multivariate Dyadic Regression Trees for Sparse Learning Problems

We propose a new nonparametric learning method based on multivariate dyadic regression trees (MDRTs). Unlike traditional dyadic decision trees (DDTs) or classification and regression trees (CARTs), MDRTs are constructed using penalized empirical risk minimization with a novel sparsity-inducing penalty. Theoretically, we show that MDRTs can simultaneously adapt to the unknown sparsity and smooth...


Minimax adaptive dimension reduction for regression

In this paper, we address the problem of regression estimation in the context of a p-dimensional predictor when p is large. We propose a general model in which the regression function is a composite function. Our model is a nonlinear extension of the usual sufficient dimension reduction setting. The strategy followed for estimating the regression function is based on the estimation of ...


Robustness in portfolio optimization based on minimax regret approach

Portfolio optimization is one of the most important issues for effective and economic investment, and there is plenty of research in the literature addressing it. Most of this research attempts to make Markowitz's primary portfolio selection model more realistic or seeks to solve the model to obtain fairly optimal portfolios. An efficient frontier in the ...




Publication date: 2015